Candidates for Synergies: Linear Discriminants versus Principal Components
Abstract
Movement primitives or synergies have been extracted from human hand movements using several matrix factorization, dimensionality reduction, and classification methods. Principal component analysis (PCA) is widely used to obtain the first few significant eigenvectors of covariance that explain most of the variance of the data. Linear discriminant analysis (LDA) is also used as a supervised lear...
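The two methods the abstract contrasts can be illustrated on toy data: PCA finds the label-blind direction of maximum variance, while Fisher's LDA finds the direction that best separates two classes. Everything below (the synthetic two-class samples and the NumPy implementation) is an illustrative sketch, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical toy data: two classes of 2-D feature vectors with large
# shared variance along x and a small class-mean shift in both axes.
X0 = rng.normal([0, 0], [3.0, 0.5], size=(200, 2))
X1 = rng.normal([1, 1], [3.0, 0.5], size=(200, 2))
X = np.vstack([X0, X1])

# PCA (unsupervised): leading eigenvector of the pooled covariance,
# i.e. the direction of maximum variance, ignoring class labels.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
evals, evecs = np.linalg.eigh(cov)
pc1 = evecs[:, np.argmax(evals)]

# Two-class Fisher LDA (supervised): w proportional to Sw^{-1}(m1 - m0),
# the direction that best separates the class means relative to
# within-class scatter.
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
w = np.linalg.solve(Sw, m1 - m0)
w /= np.linalg.norm(w)

print("PC1:", pc1, "LDA direction:", w)
```

On this data the two answers disagree: PC1 hugs the high-variance x axis, while LDA favors the low-noise y axis where the classes are easier to tell apart, which is exactly why the choice of method changes which "synergies" are extracted.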
Similar Articles

On Combining Principal Components with Fisher’s Linear Discriminants for Supervised Learning
“The curse of dimensionality” is pertinent to many learning algorithms, and it denotes the drastic increase of computational complexity and classification error in high dimensions. In this paper, principal component analysis (PCA), parametric feature extraction (FE) based on Fisher’s linear discriminant analysis (LDA), and their combination as means of dimensionality reduction are analysed with...
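A minimal sketch of the combination this abstract describes (PCA for dimensionality reduction, followed by Fisher's LDA in the reduced space) using synthetic two-class data and plain NumPy; the dimensions, class shift, and thresholding rule are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical high-dimensional two-class data: the informative shift
# lives in coordinate 0; the other coordinates are pure noise.
d, n = 10, 300
X0 = rng.normal(0.0, 1.0, size=(n, d))
X1 = rng.normal(0.0, 1.0, size=(n, d))
X1[:, 0] += 3.0                       # class separation in dim 0 only
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

# Step 1: PCA down to k dimensions. This discards low-variance
# directions and regularizes the within-class scatter LDA inverts.
k = 3
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T                     # n x k PCA scores

# Step 2: Fisher LDA in the reduced space.
Z0, Z1 = Z[y == 0], Z[y == 1]
Sw = np.cov(Z0.T) * (n - 1) + np.cov(Z1.T) * (n - 1)
w = np.linalg.solve(Sw, Z1.mean(axis=0) - Z0.mean(axis=0))

# Classify by thresholding the 1-D projection at the midpoint of the
# two projected class means.
t = (Z0 @ w).mean() / 2 + (Z1 @ w).mean() / 2
pred = (Z @ w > t).astype(int)
acc = (pred == y).mean()
print(f"accuracy after PCA+LDA: {acc:.3f}")
```

Because the between-class shift dominates the variance in dimension 0, the PCA step keeps the discriminative direction, and the subsequent LDA projection separates the classes well.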
A Doppler-Based Target Classifier Using Linear Discriminants and Principal Components
This paper describes the design of the automatic target classifier which has been introduced into the AMSTAR Battlefield Surveillance Radar. It discusses the requirements which have driven the design of the classifier, the data which is used to make the classification, the choice of Linear Discriminant Analysis as one of the classification techniques used and the use of Principal Components Ana...
Principal Components Versus Principal Axis Factoring
Note that SPSS does not provide statistical significance tests for any of the estimated parameters (such as loadings), nor does it provide confidence intervals. Judgments about the adequacy of a one- or two-component model are not based on statistical significance tests, but on whether a model limited to just one or two components does an adequate job of ...
SOM: Stochastic initialization versus principal components
Selection of a good initial approximation is a well known problem for all iterative methods of data approximation, from k-means to Self-Organizing Maps (SOM) and manifold learning. The quality of the resulting data approximation depends on the initial approximation. Principal components are popular as an initial approximation for many methods of nonlinear dimensionality reduction because its c...
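The contrast this abstract draws (PCA-based versus stochastic initialization) can be sketched for a one-dimensional SOM chain before any training; the data cloud, grid size, and the ±2σ span of the PCA codebook are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical elongated data cloud; a good SOM initialization should
# already lie along its principal axis.
X = rng.normal(size=(500, 2)) @ np.diag([4.0, 0.5])

mean = X.mean(axis=0)
Xc = X - mean
cov = Xc.T @ Xc / (len(X) - 1)
evals, evecs = np.linalg.eigh(cov)
pc1 = evecs[:, -1]                    # leading principal component

m = 10                                # nodes of a 1-D SOM chain
# PCA initialization: spread the codebook evenly along PC1 over
# +/- 2 standard deviations of the data in that direction.
span = 2.0 * np.sqrt(evals[-1])
pca_init = mean + np.linspace(-span, span, m)[:, None] * pc1

# Stochastic initialization: random data points as codebook vectors.
rand_init = X[rng.choice(len(X), m, replace=False)]

def quantization_error(codebook, data):
    """Mean distance from each sample to its nearest codebook vector."""
    dist = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
    return dist.min(axis=1).mean()

print("PCA init QE:   ", quantization_error(pca_init, X))
print("random init QE:", quantization_error(rand_init, X))
```

Comparing the quantization error of the two untrained codebooks gives a simple, data-dependent way to see how much of the final approximation quality is already decided at initialization.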
Journal
Title: Computational Intelligence and Neuroscience
Year: 2014
ISSN: 1687-5265,1687-5273
DOI: 10.1155/2014/373957